blackbox model
- North America > United States (0.36)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Information Technology > Security & Privacy (0.68)
- Government > Military (0.68)
- Government > Regional Government > North America Government > United States Government (0.36)
Domain-aware Control-oriented Neural Models for Autonomous Underwater Vehicles
Cortez, Wenceslao Shaw, Vasisht, Soumya, Tuor, Aaron, Drgoňa, Ján, Vrabie, Draguna
Conventional physics-based modeling is a time-consuming bottleneck in control design for complex nonlinear systems like autonomous underwater vehicles (AUVs). In contrast, purely data-driven models, though convenient and quick to obtain, require a large number of observations and lack operational guarantees for safety-critical systems. Data-driven models that leverage available, partially characterized dynamics have the potential to provide reliable system models in the typical data-limited scenario for high-value complex systems, thereby avoiding months of expensive expert modeling time. In this work we explore this middle ground between expert-modeled and purely data-driven modeling. We present control-oriented parametric models with varying levels of domain awareness that exploit known system structure and prior physics knowledge to create constrained deep neural dynamical system models. We employ universal differential equations to construct data-driven blackbox and graybox representations of the AUV dynamics. In addition, we explore a hybrid formulation that explicitly models the residual error of imperfect graybox models. We compare the prediction performance of the learned models across different distributions of initial conditions and control inputs to assess their accuracy, generalization, and suitability for control.
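The graybox idea in this abstract can be sketched with a universal differential equation: a vector field that combines a known physics term with a neural-network residual. The code below is a minimal, hypothetical illustration, not the authors' model: a 1-D surge dynamic with known quadratic drag, an untrained toy MLP standing in for the learned residual, and an explicit-Euler rollout.

```python
import numpy as np

# Hypothetical 1-D AUV surge dynamics: m*dv/dt = -d*v|v| + u + r(v, u),
# where -d*v|v| is the known (graybox) drag term and r(v, u) is an
# unknown residual approximated by a small neural network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def residual_net(v, u):
    # Tiny MLP standing in for the learned residual dynamics;
    # in the paper's setting its weights would be fit to data.
    h = np.tanh(W1 @ np.array([v, u]) + b1)
    return float(W2 @ h + b2)

def graybox_step(v, u, m=10.0, d=2.5, dt=0.01):
    # One explicit-Euler step of the universal differential equation:
    # known physics plus neural residual, divided by (assumed) mass m.
    dv = (-d * v * abs(v) + u + residual_net(v, u)) / m
    return v + dt * dv

# Roll out a short trajectory under constant thrust.
v = 0.0
for _ in range(100):
    v = graybox_step(v, u=5.0)
```

Setting the residual network to zero recovers the pure graybox physics model, and replacing the drag term with a second network gives the blackbox variant, which is the spectrum of domain awareness the abstract describes.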
- North America > United States > Washington > Benton County > Richland (0.04)
- North America > United States > Massachusetts (0.04)
- Energy (0.94)
- Government > Regional Government (0.46)
Context-Aware Transfer Attacks for Object Detection
Cai, Zikui, Xie, Xinxin, Li, Shasha, Yin, Mingjun, Song, Chengyu, Krishnamurthy, Srikanth V., Roy-Chowdhury, Amit K., Asif, M. Salman
Blackbox transfer attacks for image classifiers have been extensively studied in recent years. In contrast, little progress has been made on transfer attacks for object detectors. Object detectors take a holistic view of the image and the detection of one object (or lack thereof) often depends on other objects in the scene. This makes such detectors inherently context-aware and adversarial attacks in this space are more challenging than those targeting image classifiers. In this paper, we present a new approach to generate context-aware attacks for object detectors. We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks that achieve higher transfer success rates on blackbox object detectors than the state-of-the-art. We test our approach on a variety of object detectors with images from PASCAL VOC and MS COCO datasets and demonstrate up to $20$ percentage points improvement in performance compared to the other state-of-the-art methods.
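The abstract's core signal — object co-occurrence as context — can be illustrated with a toy scoring rule. The sketch below is an assumed simplification, not the authors' formulation: given the other labels in a scene, it picks the mis-categorization target whose co-occurrence with the scene is highest, so the attack remains contextually plausible. The label set and matrix values are invented for illustration.

```python
import numpy as np

# Toy co-occurrence context model. cooc[i, j] estimates how often
# class i appears alongside class j (symmetric, illustrative values).
labels = ["person", "bicycle", "car", "dog"]
cooc = np.array([
    [0.0, 0.6, 0.5, 0.4],   # person
    [0.6, 0.0, 0.2, 0.1],   # bicycle
    [0.5, 0.2, 0.0, 0.1],   # car
    [0.4, 0.1, 0.1, 0.0],   # dog
])

def context_consistent_target(scene_labels, candidates):
    # Score each candidate target label by its total co-occurrence
    # with the remaining objects in the scene; return the best one.
    idx = [labels.index(l) for l in scene_labels]
    scores = {c: cooc[labels.index(c), idx].sum() for c in candidates}
    return max(scores, key=scores.get)

# In a scene containing a person and a car, "bicycle" is a more
# context-consistent mis-categorization target than "dog".
target = context_consistent_target(["person", "car"], ["bicycle", "dog"])
```

The paper additionally uses relative locations and sizes as context; this sketch covers only the co-occurrence component.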
- North America > United States > California > Riverside County > Riverside (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > Middle East > UAE (0.04)
- Government > Military (0.72)
- Information Technology > Security & Privacy (0.68)
Zillow utilizes explainable AI, data to revolutionize how people sell houses
Zillow has been a big name for online home seekers. More than 135 million homes have been listed on the platform, and the company has streamlined the real estate transaction process, from home loans and title services to buying. It says AI has been at the heart of its success in providing customized search functions, product offerings, and accurate home valuations, with a claimed median error rate of less than 2%. Zillow's initial forays into AI in 2005 centered on blackbox models for prediction and accuracy, Stan Humphries, chief analytics officer at Zillow, said at VentureBeat's virtual Transform 2021 conference on Tuesday.
Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model creation
Patir, Rupam, Singhal, Shubham, Anantaram, C., Goyal, Vikram
Deep learning models trained on massive amounts of data tend to capture one view of the data and its associated mapping. Different deep learning models built on the same training data may capture different views of the data, depending on the underlying techniques used. To explain the decisions reached by blackbox deep learning models, we argue that it is essential to faithfully reproduce that model's view of the training data. This faithful reproduction can then be used for explanation generation. We investigate two methods for data-view extraction: a hill-climbing approach and a GAN-driven approach. We then use this synthesized data to create shadow models for explanation generation: a Decision-Tree model and a Formal Concept Analysis-based model. We evaluate these approaches on a blackbox model trained on public datasets and show its usefulness in explanation generation.
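The shadow-model step of this abstract can be sketched end to end: query a blackbox on synthesized inputs, then fit an interpretable surrogate to the blackbox's own labels. The sketch below is an assumed minimal setup, not the paper's pipeline — a hard-coded rule stands in for the opaque model, uniform noise stands in for the extracted data view, and a one-feature decision stump stands in for the decision tree.

```python
import numpy as np

rng = np.random.default_rng(1)

def blackbox(X):
    # Stand-in for an opaque model we can only query for labels.
    return (0.7 * X[:, 0] - 0.3 * X[:, 1] > 0.2).astype(int)

X_syn = rng.uniform(-1, 1, size=(500, 2))   # synthesized "data view"
y_bb = blackbox(X_syn)                      # labels from the blackbox

def fit_stump(X, y):
    # Exhaustively pick the (feature, threshold) pair whose rule
    # "feature > threshold" best agrees with the blackbox labels.
    best = (0, 0.0, 0.0)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            acc = ((X[:, f] > t).astype(int) == y).mean()
            if acc > best[2]:
                best = (f, t, acc)
    return best

feat, thresh, fidelity = fit_stump(X_syn, y_bb)
# `fidelity` measures how faithfully the shadow model reproduces
# the blackbox on the synthesized data view.
```

The fidelity score here plays the role the paper assigns to faithful reproduction: a shadow model is only useful for explanation generation to the extent that it mimics the blackbox's decisions.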
How to find Feature importances for BlackBox Models?
Data Science is the study of algorithms. I grapple with many algorithms on a day-to-day basis, so I thought of listing some of the most common and most used ones in this new DS Algorithm series. How many times has it happened that you create a lot of features and then need to come up with ways to reduce their number? Last time I wrote a post titled "The 5 Feature Selection Algorithms every Data Scientist should know," in which I talked about using correlation or tree-based methods and adding some structure to the process of feature selection. Recently I was introduced to another novel way of doing feature selection, called Permutation Importance, and I really liked it.
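Permutation importance works for any blackbox you can call for predictions: shuffle one feature column at a time and measure how much the model's score drops. A minimal sketch, with a hard-coded rule standing in for a trained model and an invented dataset where feature 0 dominates and feature 1 is unused:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = (2.0 * X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # feature 0 dominates

def predict(X):
    # Stand-in blackbox; in practice this is any fitted model's predict().
    return (2.0 * X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)

def permutation_importance(X, y, n_repeats=5):
    # Baseline accuracy, then the average accuracy drop after
    # shuffling each feature column independently.
    base = (predict(X) == y).mean()
    importances = []
    for f in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, f] = rng.permutation(Xp[:, f])
            drops.append(base - (predict(Xp) == y).mean())
        importances.append(np.mean(drops))
    return importances

imp = permutation_importance(X, y)
# imp[0] is large, imp[1] is exactly zero (the model ignores feature 1).
```

Because only the inputs are shuffled and the model is never retrained, this is cheap to run on any model, which is what makes it attractive for blackbox feature selection.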
Machine Learning Interpretability: Explaining Blackbox Models with LIME
This is the second part of our series about Machine Learning interpretability. We want to describe LIME (Local Interpretable Model-Agnostic Explanations), a popular technique for explaining blackbox models. It was proposed by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin in their paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier, which they first presented at the ACM Conference on Knowledge Discovery and Data Mining (KDD) in 2016. Please check out our previous article if you are not familiar with the concept of interpretability. There we made a distinction between model-specific and model-agnostic techniques, as well as between global and local techniques.
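The essence of LIME can be sketched in a few lines: perturb the input around the instance to be explained, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The sketch below is a simplified illustration under assumed choices (a Gaussian proximity kernel, a plain least-squares surrogate, an invented blackbox), not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def blackbox_prob(X):
    # Stand-in blackbox probability; nonlinear in feature 0.
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] ** 3 - X[:, 1])))

x0 = np.array([0.5, -0.2])                     # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(300, 2))  # local perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)  # proximity kernel

# Weighted least squares: solve (A^T W A) beta = (A^T W y) for the
# local linear surrogate with an intercept column.
A = np.hstack([np.ones((len(Z), 1)), Z])
y = blackbox_prob(Z)
beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
intercept, coef = beta[0], beta[1:]
# `coef` gives the signed local attribution for each feature.
```

The explanation is local by construction: the kernel width controls the neighborhood in which the linear surrogate is trusted, and the fitted coefficients can differ sharply from one instance to the next.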